The Software Archive – A Prerequisite for the Successful Long-Term Preservation of Digital Objects
This paper emphasizes the need for legacy software archives to support digital preservation strategies, since those strategies depend on additional software components. Such a repository should contain past applications, pre-configured general software environments, and special object-dependent additions such as fonts, codecs, and required helper applications. The same applies to metadata such as operating manuals and license keys. To date, little strategic software archiving has taken place. The issue is best addressed at an inter-organizational level; in addition, suitable legal frameworks are required, and not only at the national level.
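As a rough illustration of what one record in such a software archive might carry, the following Python sketch models a catalogue entry; all field names are assumptions of this sketch, not a schema defined by the paper.

    # Illustrative sketch only: one possible shape of a catalogue entry for
    # a legacy software archive. Field names are assumptions, not the
    # paper's schema.
    from dataclasses import dataclass, field

    @dataclass
    class ArchivedSoftware:
        name: str                                        # past application
        environment: str                                 # pre-configured base environment
        additions: list = field(default_factory=list)    # fonts, codecs, helper applications
        metadata: dict = field(default_factory=dict)     # manuals, license keys

    entry = ArchivedSoftware(
        name="WordPerfect 5.1",
        environment="MS-DOS 6.22",
        additions=["printer fonts", "graphics driver"],
        metadata={"manual": "scan.pdf", "license_key": "on file"},
    )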
Migration-by-Emulation Planets Web-Service
The availability of migration tools for older formats is often limited. We therefore suggest a different approach: using the original applications to access an object and transfer it into formats that can be accessed in today's environments. The appropriate environment for the digital artefacts can be provided through emulation. With the reproduction of the original environment, a large and diverse set of migration input/output paths becomes available.
Working within the Open Planets Project, the authors created remotely accessible Web services integrated into the PLANETS testbed. These services demonstrate preservation workflows that combine migration with the emulation of original environments.
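The abstract does not spell out the service interface; purely as a hedged sketch of how a client might drive such a migration-by-emulation service over HTTP, one could imagine something along these lines (the endpoint, parameters, and response fields are hypothetical):

    # Hypothetical client sketch, not the PLANETS API: upload an object,
    # request a conversion, and poll for the migrated result.
    import time
    import requests

    SERVICE = "https://planets.example.org/migrate"    # hypothetical endpoint

    def migrate(path, source_format, target_format):
        # Upload the digital object together with the requested conversion.
        with open(path, "rb") as f:
            resp = requests.post(SERVICE, files={"object": f},
                                 data={"from": source_format, "to": target_format})
        resp.raise_for_status()
        job = resp.json()["job_id"]                    # hypothetical response field

        # The original application runs inside a remote emulated environment,
        # so the migration is asynchronous: poll until the result is ready.
        while True:
            status = requests.get(f"{SERVICE}/{job}").json()
            if status["state"] == "done":
                return status["result_url"]
            time.sleep(5)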
Proceedings of the 5th bwHPC Symposium
In modern science, the demand for more powerful and integrated research infrastructures is growing constantly to address computational challenges in data analysis, modeling, and simulation. The bwHPC initiative, founded by the Ministry of Science, Research and the Arts and the universities in Baden-Württemberg, is a state-wide federated approach aimed at assisting scientists in mastering these challenges. At the 5th bwHPC Symposium in September 2018, scientific users, technical operators, and government representatives came together for two days at the University of Freiburg. The symposium provided an opportunity to present scientific results that were obtained with the help of bwHPC resources. It also served as a platform for discussing and exchanging ideas concerning the use of these large scientific infrastructures as well as their further development.
Feeding the Masses: DNBD3. Simple, efficient, redundant block device for large scale HPC, Cloud and PC pool installations
In computer center operations, many sites run large PC lecture pools or HPC clusters that require similar or identical operating system images and software packages. Booting over the LAN yields instantly usable systems but requires efficient provisioning of the root file system. Traditionally, general-purpose file systems like NFS are used, but read-only network block devices like the DNBD3 presented here provide a range of attractive features and can outperform the alternatives across a range of situations. DNBD3 not only allows caching and proxying at various levels, but also comes with a built-in performance monitor, versioning, and failover functionality. DNBD3 has been under development at Freiburg University for the past few years. It is released under the GPLv2 license and consists of a Linux kernel module for the clients and a user-space executable for the servers. It runs in production for two highly heterogeneous use cases: a distributed setup of campus-wide computer pools with more than 400 connected machines, and the 1000+ node compute cluster backing the Freiburg HPC and cloud services. Aggressive local caching might even allow the use of mobile clients on WLAN infrastructures in stateless Linux operation.
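The abstract does not detail DNBD3's wire protocol; the following Python fragment is only an illustrative sketch of the failover idea behind it: a client that measures the response time of redundant image servers and switches to the fastest reachable one (host names and port are placeholders):

    # Illustrative sketch, not DNBD3 code: pick the fastest of several
    # redundant read-only image servers, failing over if one is down.
    import socket
    import time

    SERVERS = [("img1.example.org", 5003), ("img2.example.org", 5003)]  # placeholders

    def measure_rtt(host, port, timeout=1.0):
        """Return the TCP connect time to a server, or None if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    def pick_server():
        """Choose the currently fastest reachable server."""
        best = None
        for host, port in SERVERS:
            rtt = measure_rtt(host, port)
            if rtt is not None and (best is None or rtt < best[0]):
                best = (rtt, (host, port))
        if best is None:
            raise RuntimeError("no image server reachable")
        return best[1]

    print("using", pick_server())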
Virtualized Research Environments on the bwForCluster NEMO
The bwForCluster NEMO offers high-performance computing resources to three quite different scientific communities (Elementary Particle Physics, Neuroscience, and Microsystems Engineering) encompassing more than 200 individual researchers. To provide a broad range of software packages and deal with individual requirements, the NEMO operators seek novel approaches to cluster operation [1]. Virtualized Research Environments (VREs) can help to separate both the different software environments and the responsibilities for maintaining the software stack. Research groups become more independent of the base software environment defined by the cluster operators. Operating VREs brings advantages such as scientific reproducibility, but may introduce caveats such as lost cycles or the need for layered job scheduling. VREs might also open up advanced possibilities such as job migration or checkpointing.
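The abstract leaves the concrete virtualization stack open; as one possible shape of launching such a VRE, here is a minimal sketch using the openstacksdk Python client, where the cloud name, image, and flavor are placeholder assumptions, not NEMO's actual configuration:

    # Minimal sketch, assuming an OpenStack-backed cloud: boot one VRE
    # instance from an image maintained by the research group.
    import openstack

    # Credentials come from clouds.yaml; the cloud name is a placeholder.
    conn = openstack.connect(cloud="nemo-vre")

    image = conn.compute.find_image("vre-physics-2018")   # hypothetical image name
    flavor = conn.compute.find_flavor("m1.large")         # placeholder flavor

    # The group maintains the image contents; the operators only provide
    # the virtualization layer underneath.
    server = conn.compute.create_server(
        name="vre-job-42",
        image_id=image.id,
        flavor_id=flavor.id,
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)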
Costs and Efforts of Research Data Management
This contribution addresses the question of the extent to which, and at which points, research data management incurs costs as well as benefits, and how these can potentially be distributed among the stakeholders. It identifies fields of action and the actors involved, and formulates possible recommendations that could be applied at the various stages of the research process in university practice. Costs and efforts can be located along the life cycle of a project and can at the same time serve as a metric for planning data management and the long-term storage of data. The increasing differentiation of the disciplines and the sustainable handling of research data require additional qualifications, which are reflected in new fields of activity. Like other costs, research data management efforts can be taken into account when applying for funding.
ViCE - Uniform Approach to Large-Scale Research Infrastructures: Provisioning & Deployment
The ViCE project (Virtual Open Science Collaboration Environment) supports scientists from different disciplines in building and adapting virtual research environments. As an important piece of base infrastructure, an overarching collaboration platform is to be created, through which the long-term reuse of research results can be ensured, particularly with regard to new scientific questions.
The goal is to enable scientists to document different versions of their virtual research environments and research data as their work proceeds, and to make them available to other researchers. The platform is provided together with the infrastructure partners Freiburg, Tübingen, and Mannheim (HPC, bwCloud, bwLehrpool), initially for the research communities of English studies, business informatics, life sciences, and particle physics. As a scientific service, it is also intended to remain available to further disciplines in the long term and to be used in teaching and in the integration of early-career researchers.
Game of Templates. Deploying and (re-)using Virtualized Research Environments in High-Performance and High-Throughput Computing
The Virtual Open Science Collaboration Environment project worked on different use cases to evaluate the steps necessary for virtualization or containerization, especially when considering the external dependencies of digital workflows. Virtualized Research Environments (VREs) can both help to broaden the user base of an HPC cluster like NEMO and offer new forms of packaging scientific workflows as well as managing software stacks. The eResearch initiative on VREs sponsored by the state of Baden-Württemberg provided the necessary framework for both the researchers of various disciplines and the providers of (large-scale) compute infrastructures to define future operational models of HPC clusters and scientific clouds. In daily operations, VREs running on virtualization or containerization technologies such as OpenStack or Singularity help to disentangle the responsibilities for the software stacks needed to fulfill a certain task. Nevertheless, the reproduction of VREs as well as the provisioning of research data to be computed and stored afterwards creates a number of challenges that need to be solved beyond the traditional scientific computing models.
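As a rough illustration of the Singularity route mentioned above, reproducing one packaged workflow step from a container image might look like the following Python sketch; the image and script names are placeholders, not artifacts of the project:

    # Rough illustration: run one step of a containerized workflow inside
    # a Singularity image, isolating its software stack from the host.
    import subprocess

    IMAGE = "vre-workflow.sif"        # hypothetical container image

    def run_step(script, *args):
        """Execute a workflow step inside the container."""
        cmd = ["singularity", "exec", IMAGE, "python3", script, *args]
        subprocess.run(cmd, check=True)

    run_step("analyze.py", "--input", "dataset.h5")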
- …